-
Abstract: In recent years, longer and heavier trains have become more common, primarily driven by efficiency and cost-saving measures in the railroad industry. Regulation of train length is currently under consideration in the United States at both the federal and state levels because of concerns that longer trains may have a higher risk of derailment, but the relationship between train length and derailment risk is not yet well understood. In this study, we use data on freight train accidents during the 2013–2022 period from the Federal Railroad Administration (FRA) Rail Equipment Accident and Highway-Rail Grade Crossing Accident databases to estimate the relationship between freight train length and the risk of derailment. We find that longer trains do have a greater risk of derailment. Based on our analysis, running 100-car trains is associated with 1.11 (95% confidence interval: 1.10–1.12) times the derailment odds of running 50-car trains (an 11% increase), even accounting for the fact that only half as many 100-car trains would need to run. For 200-car trains, the odds increase by 24% (odds ratio 1.24, 95% confidence interval: 1.20–1.28), again accounting for the need for fewer trains. Understanding derailment risk is an important component of evaluating the overall safety of the rail system and of the future development and regulation of freight rail transportation. Given the limitations of current data on freight train length, this study provides an important step toward such an understanding.
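For readers curious how length-versus-derailment odds ratios of this kind are typically estimated, here is a minimal sketch of a logistic-regression setup in the spirit of the abstract. The file name, column names, and single-predictor specification are all assumptions for illustration; the authors' actual model is not reproduced here.

```python
# A minimal, hypothetical sketch -- not the authors' actual model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical extract of FRA accident records: one row per accident, with
# 'n_cars' (train length in cars) and 'derailed' (1 = derailment, 0 = other).
accidents = pd.read_csv("fra_accidents_2013_2022.csv")  # hypothetical file

# Logistic regression of derailment on train length.
model = smf.logit("derailed ~ n_cars", data=accidents).fit()

# Under this model, the odds ratio for a k-car increase is exp(beta * k);
# a value near 1.11 for the 50-car increase would match the reported estimate.
beta = model.params["n_cars"]
print("OR, 100- vs 50-car trains:", np.exp(beta * (100 - 50)))
print("OR, 200- vs 50-car trains:", np.exp(beta * (200 - 50)))
```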
-
Abstract: Systems thinking (ST) comprises a set of critical skills and approaches for addressing today's complex societal problems and has therefore been introduced into the curricula of many educational programmes around the world. Despite all the attention to ST, there is less consensus when it comes to evaluating and assessing ST skills. In particular, a quantitative assessment approach that captures ST's multi-dimensionality is crucial when evaluating the degree to which one has learned and utilizes ST. This paper proposes a systematic approach to creating such a multi-dimensional Index of ST from textual data. We first provide an overview of the theoretical background as it pertains to different approaches to measuring ST skills. We then present a conceptual framework based on ST skill measures and transform this framework into a quantifiable model. Finally, using student data, we illustrate an integrated index of ST skills. We compute this index using a mixed-methods approach that combines robust principal component analysis, data envelopment analysis, and a two-stage bootstrapping approach. The results show that (i) our model serves as a systematic, multi-dimensional approach to ST by including multiple measures of ST skills, and (ii) international-student status and self-reported math skills are significant predictors of one's level of ST in the graduate student dataset (N = 30), whereas no significant factors are found in the first-year engineering student dataset (N = 144).
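As a rough illustration of folding several skill measures into one composite index, here is a sketch that uses ordinary PCA as a stand-in for the paper's robust PCA; the DEA and bootstrapping stages are omitted, and all scores are invented for the example.

```python
# A simplified sketch of composite-index construction, assuming per-student
# ST skill scores have already been extracted from textual data. Plain PCA
# substitutes for the paper's robust PCA; DEA and bootstrapping are omitted.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Rows = students, columns = distinct ST skill measures (hypothetical values).
scores = np.array([
    [3.0, 4.5, 2.0],
    [4.0, 4.0, 3.5],
    [2.5, 3.0, 1.0],
    [5.0, 4.8, 4.2],
])

z = StandardScaler().fit_transform(scores)   # put measures on a common scale
pca = PCA(n_components=1)
index = pca.fit_transform(z).ravel()         # first principal component = index
weights = pca.components_[0]                 # how much each measure contributes

print("ST index per student:", index.round(2))
print("measure weights:", weights.round(2))
```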
-
Abstract: Self-report assessments are used frequently in higher education to assess a variety of constructs, including attitudes, opinions, knowledge, and competence. Systems thinking is an example of one competence often measured using self-report assessments, where individuals answer several questions about their perceptions of their own skills, habits, or daily decisions. In this study, we define systems thinking as the ability to see the world as a complex interconnected system in which different parts can influence each other, and the interrelationships determine system outcomes. An alternative, less common approach is to measure skills directly by providing a scenario about an unstructured problem and evaluating respondents' judgment or analysis of the scenario (scenario-based assessment). This study explored the relationships between engineering students' performance on self-report and scenario-based assessments of systems thinking, finding no significant relationships between the two assessment techniques. These results suggest that there may be limitations to using self-report assessments to assess systems thinking and other competencies in educational research and evaluation, which could be addressed by incorporating alternative formats for assessing competence. Future work should explore these findings further and support the development of alternative assessment approaches.
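A hedged sketch of the kind of comparison the abstract reports: correlating self-report scores with scenario-based rubric scores. The numbers below are invented, and the study's actual analysis may have used different statistics.

```python
# Illustrative only -- hypothetical scores, not study data.
from scipy.stats import pearsonr

self_report = [4.1, 3.8, 4.5, 3.2, 4.0, 3.6]   # e.g., Likert-scale means
scenario    = [2.0, 3.5, 1.5, 3.0, 2.5, 3.8]   # e.g., rubric scores

r, p = pearsonr(self_report, scenario)
print(f"r = {r:.2f}, p = {p:.3f}")  # a non-significant p would echo the finding
```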
-
Abstract: Cognitive maps, or mental maps, are externalized portrayals of mental models: people's mental representations of reality and their presumptions about how the world works. They are often used as the intermediary step toward uncovering individuals' presumptions of the outside world. Yet the next step is often vague: once one's understanding of the real world is mapped, how can we systematically evaluate the maps and compare and contrast them? In this note, we review several common approaches to analyzing cognitive maps, some rooted in network theories, and apply them to a dataset of 30 graduate students who analyzed a complex socioenvironmental problem. Our analysis shows that these methods provide inconsistent results and often fall short of capturing variations in mental models. The analysis points to a lack of effective methods for examining such maps and helps articulate a major research problem for systems-thinking scholars. © 2023 System Dynamics Society.
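To make the network-theoretic approaches concrete, here is a small sketch of the kind of graph metrics commonly computed on cognitive maps. The map fragment is hypothetical and is not drawn from the 30-student dataset.

```python
# A hypothetical cognitive-map fragment analyzed with common network
# measures (size, density, centrality) -- the sort of metrics the note
# reviews, not the authors' specific analysis.
import networkx as nx

cmap = nx.DiGraph()  # directed links: "A influences B"
cmap.add_edges_from([
    ("irrigation", "water withdrawal"),
    ("water withdrawal", "lake level"),
    ("lake level", "salinity"),
    ("salinity", "ecosystem health"),
    ("policy", "irrigation"),
])

print("concepts:", cmap.number_of_nodes(), "links:", cmap.number_of_edges())
print("density:", round(nx.density(cmap), 3))
print("degree centrality:",
      {n: round(c, 2) for n, c in nx.degree_centrality(cmap).items()})
```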
-
The Lake Urmia vignette: a tool to assess understanding of complexity in socio-environmental systems
Abstract: We introduce the Lake Urmia Vignette (LUV) as a tool to assess individuals' understanding of complexity in socio-environmental systems. LUV is based on a real-world case and includes a short vignette describing an environmental catastrophe involving a lake. Over a few decades, significant issues have manifested at the lake because of various social, political, economic, and environmental factors. We design a rubric for assessing responses to a prompt, conduct a pilot test with a sample of 30 engineering graduate students, and compare responses to LUV with other measures. Our findings suggest that students' understanding of complexity is positively associated with their understanding of systems concepts such as feedback loops, but not with other possible variables such as self-reported systems thinking skills or systems-related coursework. Based on the provided instructions, researchers can use LUV as a novel assessment tool to examine understanding of complexity in socio-environmental systems. © 2020 System Dynamics Society.
